This tutorial covers data access to a pre-prepared curation of connectome data for the major Drosophila connectome projects (BANC, FAFB, MANC, hemibrain and maleCNS).
Currently working with dataset: banc_746
Data location: gs://sjcabs_2025_data (Google Cloud Storage)
If using Google Cloud Storage (data_path starts with gs://), we use Python’s gcsfs library (via reticulate) to stream files directly into R memory, so no manual download is required.
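The key mechanism here is that Arrow can read a Feather file straight from a raw vector of bytes, which is exactly what the streaming path does with bytes fetched by gcsfs. A minimal local sketch of that read-from-raw-bytes step (using a temporary toy file rather than the GCS bucket):

```r
library(arrow)

# Write a tiny Feather file to simulate bytes arriving from gcsfs
tmp <- tempfile(fileext = ".feather")
write_feather(data.frame(id = 1:3), tmp)

# Read the file's bytes into a raw vector, as the GCS path would
raw_bytes <- readBin(tmp, what = "raw", n = file.size(tmp))

# Arrow reads the Feather table directly from the raw vector in memory
df <- read_feather(raw_bytes)
nrow(df)  # 3
```

Because the bytes never need to touch disk on the R side, the same pattern works whether they come from a local file or a cloud object store.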
Authentication required: Before running this tutorial with GCS, authenticate with Google Cloud:
# Install gcloud CLI if you haven't already:
# https://cloud.google.com/sdk/docs/install
# Authenticate with your Google account
gcloud auth application-default login
# Follow the prompts in your browser to authenticate
This creates credentials that gcsfs will use automatically.
If using a local path, data is read directly from disk using Arrow - no additional setup required!
Packages are loaded from packages.R:
# Core packages loaded: arrow, tidyverse, ggplot2, patchwork
# (See packages.R for details)
# Setup GCS access if needed (using custom setup_gcs_access() function)
if (use_gcs) {
setup_gcs_access()
}
## Setting up GCS access...
## Using conda Python environment...
## ✓ GCS access configured
## ✓ Python warnings suppressed
# Set ggplot theme for aesthetic plots
theme_set(theme_minimal(base_size = 12) +
theme(
plot.title = element_text(face = "bold", size = 14),
plot.subtitle = element_text(size = 11),
axis.text = element_text(size = 10),
legend.position = "right",
panel.grid.minor = element_blank()
))
This tutorial supports two data access modes:
Access data directly from Google Cloud Storage - no manual download required!
- Pros: No local storage needed, always up-to-date
- Cons: Slower (3-5 min for large files), requires authentication & internet
GCS bucket location:
gs://sjcabs_2025_data/
Download data once with gsutil, then access locally:
# Download specific dataset (e.g., BANC metadata + synapses)
gsutil -m cp gs://sjcabs_2025_data/banc/banc_746_meta.feather ~/data/sjcabs_data/banc/
gsutil -m cp gs://sjcabs_2025_data/banc/banc_746_synapses.parquet ~/data/sjcabs_data/banc/
# Or download entire dataset directory
gsutil -m cp -r gs://sjcabs_2025_data/banc ~/data/sjcabs_data/
# Or download all datasets (~500 GB total)
gsutil -m cp -r gs://sjcabs_2025_data ~/data/
Then update the setup chunk:
data_path <- "~/data/sjcabs_data" # Use your local path
- Pros: Much faster (seconds instead of minutes), and Parquet lazy loading still works!
- Cons: Requires ~50-100 GB disk space per dataset
Note: Even with local files, Parquet’s lazy loading means you don’t need to load entire files into RAM!
gs://sjcabs_2025_data/banc/
gs://sjcabs_2025_data/fafb/
gs://sjcabs_2025_data/manc/
gs://sjcabs_2025_data/hemibrain/
gs://sjcabs_2025_data/malecns/
Our data files use two Apache Arrow formats:
- Feather (.feather) for metadata - smaller files (~10 MB), loaded entirely into memory
- Parquet (.parquet) for synapses - large files (4-15 GB), supports lazy loading and predicate pushdown
Why Parquet for synapses?
- ✓ Column selection: download only needed columns
- ✓ Row filtering: filter on the server before downloading
- ✓ Compression: smaller file sizes
- ✓ Efficient for analytical queries on large datasets
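Column selection and row filtering can be seen in miniature with Arrow’s lazy dataset interface. This sketch builds a toy Parquet file (toy column names mirror the real synapse table); on the real multi-GB files, the same pattern means unneeded columns and row groups are never read:

```r
library(arrow)
library(dplyr)
library(stringr)

# A toy synapse-like table written to a temporary Parquet file
tmp <- tempfile(fileext = ".parquet")
write_parquet(
  data.frame(pre      = c(1, 1, 2, 3),
             post     = c(2, 3, 3, 1),
             neuropil = c("MB_CA_R", "AL_R", "MB_CA_R", "LO_R"),
             side     = c("right", "right", "right", "left")),
  tmp
)

# Lazy query: nothing is materialised until collect()
res <- open_dataset(tmp) %>%
  filter(str_detect(neuropil, "^MB_CA"),  # row filtering
         side == "right") %>%
  select(pre, post) %>%                   # column selection
  collect()
nrow(res)  # 2
```

Only the two matching rows, and only the two requested columns, end up in R memory.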
Setup paths and GCS filesystem (if needed):
# Setup GCS filesystem if using GCS (using custom setup_gcs_filesystem() function)
gcs_fs <- NULL
if (use_gcs) {
gcs_fs <- setup_gcs_filesystem()
}
## Authenticating with Google Cloud Storage...
# Construct file paths using custom construct_path() function
# Meta files are Feather, synapse files are Parquet
meta_path <- construct_path(data_path, dataset, "meta")
synapse_path <- construct_path(data_path, dataset, "synapses")
meta_full <- read_feather_smart(meta_path, gcs_filesystem = gcs_fs)
## Reading from GCS: gs://sjcabs_2025_data/banc/banc_746_meta.feather
## Downloading from GCS... (this may take several minutes for large files)
## Downloaded 9.84 MB from GCS
## Loading into memory with Arrow...
## ✓ Done! Loaded 168791 rows
For metadata (~10 MB), we can load the entire dataset into memory:
# Use the metadata we already loaded
meta <- meta_full
# Show summary and columns
cat("Dataset:", dataset, "-", nrow(meta), "neurons,", ncol(meta), "columns\n")
## Dataset: banc_746 - 168791 neurons, 18 columns
print(colnames(meta))
## [1] "banc_746_id" "supervoxel_id"
## [3] "region" "side"
## [5] "hemilineage" "nerve"
## [7] "flow" "super_class"
## [9] "cell_class" "cell_sub_class"
## [11] "cell_type" "neurotransmitter_predicted"
## [13] "neurotransmitter_score" "cell_function"
## [15] "cell_function_detailed" "body_part_sensory"
## [17] "body_part_effector" "status"
This metadata table contains all of the “identified” neurons in the dataset. You may encounter neuron IDs outside of this metadata table, e.g. in the synapse table; those are “fragments” that have not been linked up to full neurons. Let’s get our list of “proofread” identified neurons, as these are mainly what we will want for analysis.
# Use the metadata we already loaded
# Dataset ID - change this ID for different datasets, e.g. the BANC data is: banc_746_id
proofread_ids <- na.omit(unique(meta_full[[dataset_id]]))
# Show the number of proofread IDs
print(length(proofread_ids))
## [1] 168791
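The fragment distinction above is easy to check programmatically: any ID that appears in a synapse table but not in the proofread list is a fragment. A sketch with toy stand-ins for mb_synapses and proofread_ids (the real objects are built later in this tutorial):

```r
# Toy stand-ins: a tiny synapse table and a proofread-ID list
toy_synapses  <- data.frame(pre  = c("a", "b", "x"),
                            post = c("b", "x", "a"))
toy_proofread <- c("a", "b")

# IDs seen at synapses but absent from the proofread list are fragments
fragments <- setdiff(unique(c(toy_synapses$pre, toy_synapses$post)),
                     toy_proofread)
fragments  # "x"
```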
The metadata uses a hierarchical classification system. See the full schema here. This is based largely on the hierarchical scheme developed in Schlegel et al., 2024, see here.
Hierarchy: flow → super_class → cell_class → cell_sub_class → cell_type
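As a compact alternative to the per-level pipelines below, the hierarchy levels can be traversed programmatically with the `.data` pronoun. A sketch on a toy stand-in for `meta` (in practice you would pass `meta` and all five level names):

```r
library(dplyr)

# Toy stand-in for the metadata table
toy_meta <- tibble::tibble(
  flow        = c("intrinsic", "intrinsic", "afferent", NA),
  super_class = c("central_brain_intrinsic", "optic_lobe_intrinsic",
                  "sensory", NA)
)

# Count each classification level in one pass
hier_levels <- c("flow", "super_class")
counts <- lapply(hier_levels, function(col) {
  toy_meta %>%
    count(.data[[col]], sort = TRUE) %>%
    filter(!is.na(.data[[col]]))
})
names(counts) <- hier_levels
counts$flow
```

The `.data[[col]]` pronoun lets a column name held in a string be used inside dplyr verbs, which is also how the dataset-specific ID column is handled later in this tutorial.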
# Count neurons by classification level
flow_counts <- meta %>%
count(flow, sort = TRUE) %>%
filter(!is.na(flow))
super_counts <- meta %>%
count(super_class, sort = TRUE) %>%
filter(!is.na(super_class))
class_counts <- meta %>%
count(cell_class, sort = TRUE) %>%
filter(!is.na(cell_class))
print(flow_counts)
## # A tibble: 3 × 2
## flow n
## <chr> <int>
## 1 intrinsic 82299
## 2 afferent 15462
## 3 efferent 1031
print(head(super_counts, 10))
## # A tibble: 10 × 2
## super_class n
## <chr> <int>
## 1 optic_lobe_intrinsic 31656
## 2 central_brain_intrinsic 29068
## 3 sensory 14940
## 4 ventral_nerve_cord_intrinsic 12785
## 5 visual_projection 5765
## 6 ascending 1839
## 7 descending 1312
## 8 motor 836
## 9 sensory_ascending 512
## 10 visual_centrifugal 449
print(head(class_counts, 10))
## # A tibble: 10 × 2
## cell_class n
## <chr> <int>
## 1 transverse_neuron 9547
## 2 transmedullary 5233
## 3 bristle_neuron 5188
## 4 kenyon_cell 4316
## 5 lamina_monopolar 3747
## 6 medulla_intrinsic 3558
## 7 lobula_columnar 3250
## 8 distal_medulla 2914
## 9 single_leg_neuromere 2844
## 10 olfactory_receptor_neuron 2170
Neurotransmitter predictions are based on Eckstein & Bates et al. (2024) Cell.
# Count neurotransmitter predictions
nt_counts <- meta %>%
count(neurotransmitter_predicted) %>%
filter(!is.na(neurotransmitter_predicted)) %>%
arrange(desc(n)) %>%
mutate(neurotransmitter_predicted = fct_reorder(neurotransmitter_predicted, n))
# Create plot
p_nt <- ggplot(nt_counts, aes(x = neurotransmitter_predicted, y = n, fill = neurotransmitter_predicted)) +
geom_col(show.legend = FALSE) +
geom_text(aes(label = scales::comma(n)), hjust = -0.2, size = 3.5) +
scale_fill_brewer(palette = "Set2") +
scale_y_continuous(expand = expansion(mult = c(0, 0.15)), labels = scales::comma) +
coord_flip() +
labs(
title = paste("Neurotransmitter Predictions:", dataset),
subtitle = "Based on Eckstein & Bates et al. (2024)",
x = "Predicted Neurotransmitter",
y = "Number of Neurons"
)
# Save and display plot
save_plot(p_nt, "neurotransmitter_distribution")
ggplotly(p_nt)
Our metadata follows a hierarchical scheme (i.e. flow, super_class, cell_class, cell_sub_class, cell_type), with additional non-hierarchical labels (e.g. neurotransmitter_predicted, nerve, hemilineage).
Kenyon cells are the principal neurons of the insect mushroom body, forming parallel pathways for associative memory. They integrate multi-sensory (but mostly olfactory) information and can number in the thousands per fly brain.
Let’s load the metadata and filter for Kenyon cells:
# Read metadata into memory (using custom read_feather_smart() function)
meta_full <- read_feather_smart(meta_path, gcs_filesystem = gcs_fs)
## Reading from GCS: gs://sjcabs_2025_data/banc/banc_746_meta.feather
## Downloading from GCS... (this may take several minutes for large files)
## Downloaded 9.84 MB from GCS
## Loading into memory with Arrow...
## ✓ Done! Loaded 168791 rows
# Filter for Kenyon cells
kenyon_cells <- meta_full %>%
filter(str_detect(cell_class, "kenyon_cell"))
cat("Found", nrow(kenyon_cells), "Kenyon cells in", dataset, "\n")
## Found 4316 Kenyon cells in banc_746
head(kenyon_cells)
## # A tibble: 6 × 18
## banc_746_id supervoxel_id region side hemilineage nerve flow super_class
## <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
## 1 72057594147069… 755767194484… centr… right <NA> <NA> intr… central_br…
## 2 72057594144202… 752257692153… centr… right <NA> <NA> intr… central_br…
## 3 72057594146680… 752257005630… centr… right <NA> <NA> intr… central_br…
## 4 72057594163925… 754368754476… centr… right <NA> <NA> intr… central_br…
## 5 72057594151857… 752960005204… centr… right <NA> <NA> intr… central_br…
## 6 72057594162876… 752960692400… centr… right <NA> <NA> intr… central_br…
## # ℹ 10 more variables: cell_class <chr>, cell_sub_class <chr>, cell_type <chr>,
## # neurotransmitter_predicted <chr>, neurotransmitter_score <dbl>,
## # cell_function <chr>, cell_function_detailed <chr>, body_part_sensory <chr>,
## # body_part_effector <chr>, status <chr>
Synapse files can be very large (4-15 GB for full datasets). For this tutorial, we’ll use pre-filtered subsets focusing on specific brain regions, which are much faster to load.
For workshop use: We provide pre-filtered synapse data in Feather format for common regions like the mushroom body. This avoids the 10-20 minute wait for downloading and filtering large Parquet files.
For advanced use: If you need to query the full synapse dataset or filter by custom criteria, see the commented code at the end of this section for Parquet querying approaches.
Our synapses have been roughly mapped to “neuropils”, which are human-determined regions of the nervous system. These demarcations are based on lumps and grooves on the surface of neural tissue and on boundaries in synapse densities, and they roughly correlate with functional circuits, at least in some cases.
Our brain neuropils are transformed into connectome spaces from the light-level demarcations of Ito et al., 2014, see here.
This means that the volumes can be slightly the wrong shape, and slightly shifted by some microns in space. As a consequence, neurons that are not actually in the canonical mushroom body calyx are caught by our search.
Neuropils are simply helpful guides through the nervous system, like countries on a map. Countries correlate with geography, but if you want to understand geology you generally ignore their human-made borders. Likewise, in connectomics, neuropils are guides that set your sights on the right location, but real answers come from connectivity, and from thinking about your results.
Our ventral nerve cord neuropils come from Court et al. 2020, see here.
The mushroom body (MB) is the insect brain structure for associative learning and memory. The mushroom body calyx (MB_CA) is the primary input region of the mushroom body, where Kenyon cells receive olfactory and other sensory information from projection neurons. For performance, we’ll focus on the calyx rather than the entire mushroom body structure.
Let’s extract MB calyx synapses using regex pattern matching.
For more details on mushroom body organization and function, see Li et al. 2020 and Aso et al. 2014.
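The regex used for the calyx anchors at the start of the neuropil label, so it matches all MB_CA variants (left, right, subregions) without catching other mushroom body compartments. A sketch on toy labels (the suffixes here are illustrative):

```r
library(stringr)

# Toy neuropil labels; "^MB_CA" matches the calyx variants only
neuropils <- c("MB_CA_R", "MB_CA_L", "MB_ML_R", "AL_R")
str_detect(neuropils, "^MB_CA")  # TRUE TRUE FALSE FALSE
```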
We provide a pre-filtered feather file at:
banc/mushroom_body/banc_746_mushroom_body_synapses.feather
# Load pre-filtered mushroom body synapses from feather file
# This is MUCH faster than querying the full 4-15 GB Parquet file!
dataset_base <- sub("_[0-9]+$", "", dataset)
mb_synapses_path <- file.path(data_path, dataset_base, "mushroom_body", paste0(dataset, "_mushroom_body_synapses.feather"))
mb_synapses <- read_feather_smart(mb_synapses_path, gcs_filesystem = gcs_fs)
## Reading from GCS: gs://sjcabs_2025_data/banc/mushroom_body/banc_746_mushroom_body_synapses.feather
## Downloading from GCS... (this may take several minutes for large files)
## Downloaded 230.79 MB from GCS
## Loading into memory with Arrow...
## ✓ Done! Loaded 2377573 rows
# Filter for right side and proofread neurons
mb_synapses <- mb_synapses %>%
filter(side == "right",
pre %in% proofread_ids | post %in% proofread_ids)
cat("Unique presynaptic neurons/fragments:", n_distinct(mb_synapses$pre), "\n")
## Unique presynaptic neurons/fragments: 80343
cat("Unique postsynaptic neurons/fragments:", n_distinct(mb_synapses$post), "\n")
## Unique postsynaptic neurons/fragments: 869700
For reference, here’s how you would query the full synapse Parquet file if needed. This is commented out because it takes 10-20 minutes to download and filter the multi-GB file:
# NOTE: This code is for reference only - it takes 10-20 minutes to run!
#
# synapse_path <- construct_path(data_path, dataset, "synapses") # Full Parquet file
#
# if (use_gcs) {
# # GCS APPROACH: Server-side filtering with DuckDB
# # DuckDB reads directly from GCS with predicate pushdown
# cat("\nQuerying GCS Parquet with DuckDB server-side filtering...\n")
#
# # Query with server-side filtering (using custom query_parquet_gcs() function)
# mb_synapses <- query_parquet_gcs(
# path = synapse_path,
# gcs_filesystem = gcs_fs,
# filters = "neuropil LIKE 'MB_CA%' AND side = 'right'", # SQL WHERE clause
# columns = c("id", "pre", "post", "neuropil", "side")
# )
#
# } else {
# # LOCAL APPROACH: Arrow lazy evaluation
# cat("\nQuerying local Parquet with lazy evaluation...\n")
#
# # Open dataset lazily
# synapses_ds <- open_dataset_lazy(synapse_path, format = "parquet")
#
# # Define and execute query
# mb_synapses <- synapses_ds %>%
# filter(str_detect(neuropil, "^MB_CA"),
# side == "right",
# pre %in% proofread_ids | post %in% proofread_ids) %>%
# select(id, pre, post, neuropil, side) %>%
# collect()
# }
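The DuckDB branch above can be tried end-to-end on a local toy Parquet file; this sketch shows the same SQL-with-pushdown pattern without the multi-GB download (against GCS, the custom query_parquet_gcs() wrapper handles the remote filesystem):

```r
library(duckdb)  # attaches DBI
library(arrow)

# A toy synapse-like Parquet file
tmp <- tempfile(fileext = ".parquet")
write_parquet(
  data.frame(pre      = 1:4,
             post     = 4:1,
             neuropil = c("MB_CA_R", "AL_R", "MB_CA_L", "LO_R"),
             side     = c("right", "right", "left", "left")),
  tmp
)

# DuckDB scans the Parquet file and applies the WHERE clause during the
# scan, so non-matching row groups are skipped
con <- dbConnect(duckdb())
q <- sprintf(
  "SELECT pre, post FROM read_parquet('%s') WHERE neuropil LIKE 'MB_CA%%'",
  tmp
)
res <- dbGetQuery(con, q)
dbDisconnect(con, shutdown = TRUE)
nrow(res)  # 2
```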
Define MB calyx neurons as those with ≥100 synapses (inputs or outputs) within the MB calyx.
As noted before, neuropils are just guides: the real way to define calyx neurons is as neurons that synapse onto the dendrites of Kenyon cells. So we will add that filter as well.
# Get kenyon cell ids
kc_ids <- meta %>%
filter(cell_class == "kenyon_cell") %>%
pull(.data[[paste0(dataset, "_id")]])
# Count outputs per neuron
mb_outputs <- mb_synapses %>%
filter(pre %in% proofread_ids & (pre %in% kc_ids | post %in% kc_ids)) %>%
count(pre, name = "n_outputs") %>%
filter(n_outputs >= 100)
# Count inputs per neuron
mb_inputs <- mb_synapses %>%
filter(post %in% proofread_ids & (pre %in% kc_ids | post %in% kc_ids)) %>%
count(post, name = "n_inputs") %>%
filter(n_inputs >= 100)
# Combine to get all MB neurons
mb_neurons <- unique(c(mb_outputs$pre, mb_inputs$post))
cat("Neurons with ≥100 synapses in MB:", length(mb_neurons), "\n")
## Neurons with ≥100 synapses in MB: 1843
# Check how many are Kenyon cells
# Use full dataset name with version for ID column (e.g., "banc_746_id")
mb_meta <- meta %>%
filter(.data[[paste0(dataset, "_id")]] %in% mb_neurons)
n_kc <- sum(str_detect(mb_meta$cell_class, "kenyon_cell"), na.rm = TRUE)
n_other <- nrow(mb_meta) - n_kc
cat(" Kenyon cells:", n_kc, "\n")
## Kenyon cells: 1685
cat(" Other neurons:", n_other, "\n")
## Other neurons: 158
What other neuron types are present in the mushroom body calyx?
# Prepare data
mb_meta_clean <- mb_meta %>%
filter(!is.na(cell_type)) %>%
mutate(
super_class = replace_na(super_class, "other"),
cell_class = replace_na(cell_class, "other"),
cell_sub_class = replace_na(cell_sub_class, "other"),
is_kenyon = str_detect(cell_class, "kenyon_cell")
) %>%
filter(!is_kenyon) # Focus on non-Kenyon cells
# Count by classification levels
super_counts <- mb_meta_clean %>% count(super_class, sort = TRUE)
class_counts <- mb_meta_clean %>% count(cell_class, sort = TRUE) %>% head(15)
subclass_counts <- mb_meta_clean %>% count(cell_sub_class, sort = TRUE) %>% head(15)
# Plot super_class
p1 <- ggplot(super_counts, aes(x = fct_reorder(super_class, n), y = n, fill = super_class)) +
geom_col(show.legend = FALSE) +
geom_text(aes(label = n), hjust = -0.2, size = 3) +
scale_fill_brewer(palette = "Set3") +
scale_y_continuous(expand = expansion(mult = c(0, 0.15))) +
coord_flip() +
labs(title = "Non-Kenyon MB Neurons: Super Class", x = NULL, y = "Count")
# Plot cell_class (top 15)
p2 <- ggplot(class_counts, aes(x = fct_reorder(cell_class, n), y = n, fill = cell_class)) +
geom_col(show.legend = FALSE) +
geom_text(aes(label = n), hjust = -0.2, size = 3) +
scale_fill_viridis_d(option = "viridis") +
scale_y_continuous(expand = expansion(mult = c(0, 0.15))) +
coord_flip() +
labs(title = "Cell Class (Top 15)", x = NULL, y = "Count")
# Plot cell_sub_class (top 15)
p3 <- ggplot(subclass_counts, aes(x = fct_reorder(cell_sub_class, n), y = n, fill = cell_sub_class)) +
geom_col(show.legend = FALSE) +
geom_text(aes(label = n), hjust = -0.2, size = 3) +
scale_fill_viridis_d(option = "plasma") +
scale_y_continuous(expand = expansion(mult = c(0, 0.15))) +
coord_flip() +
labs(title = "Cell Sub-Class (Top 15)", x = NULL, y = "Count")
# Save and display plots separately (each plot is independent)
save_plot(p1, "mb_neurons_super_class")
ggplotly(p1)
save_plot(p2, "mb_neurons_cell_class")
ggplotly(p2)
save_plot(p3, "mb_neurons_cell_subclass")
ggplotly(p3)
Here we create a summary comparing Kenyon vs non-Kenyon MB neurons:
# Prepare summary data
mb_summary <- data.frame(
Category = c("Kenyon Cells", "Other MB Neurons"),
Count = c(n_kc, n_other),
Percentage = c(n_kc / (n_kc + n_other) * 100,
n_other / (n_kc + n_other) * 100)
)
# Create summary plot with ggplot2
p_summary <- ggplot(mb_summary, aes(x = "", y = Count, fill = Category)) +
geom_col(width = 1, color = "white", linewidth = 2) +
geom_text(aes(label = paste0(Count, "\n(", round(Percentage, 1), "%)")),
position = position_stack(vjust = 0.5),
size = 5, fontface = "bold", color = "white") +
coord_polar("y", start = 0) +
scale_fill_manual(values = c("#E69F00", "#56B4E9")) +
labs(
title = paste("Mushroom Body Calyx Neurons:", dataset),
subtitle = "Neurons with ≥100 synapses in MB calyx",
fill = "Cell Type"
) +
theme_void() +
theme(
plot.title = element_text(face = "bold", size = 14, hjust = 0.5),
plot.subtitle = element_text(size = 11, hjust = 0.5),
legend.position = "right"
)
# Save static plot
save_plot(p_summary, "mb_neurons_summary")
# Create interactive plotly pie chart (coord_polar doesn't convert well)
p_summary_plotly <- plot_ly(
mb_summary,
labels = ~Category,
values = ~Count,
type = 'pie',
textposition = 'inside',
textinfo = 'label+percent+value',
marker = list(colors = c("#E69F00", "#56B4E9"),
line = list(color = '#FFFFFF', width = 2)),
showlegend = TRUE
) %>%
layout(
title = list(
text = paste0("Mushroom Body Calyx Neurons: ", dataset,
"<br><sub>Neurons with ≥100 synapses in MB calyx</sub>"),
x = 0.5,
xanchor = 'center'
)
)
p_summary_plotly
Now try this analysis yourself with a different dataset!
Exercise: Switch to the malecns dataset and compare results with BANC.
# To work with a different dataset, change the dataset variable at the top:
# dataset <- "malecns_09"
# dataset_id <- "malecns_09_id"
# Then re-run the entire notebook to see how the results differ!
# Differences likely reflect differences in annotation between projects
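Since the ID column name follows the dataset name throughout this tutorial, both variables can be derived from a single string, so only one assignment needs changing per run. A sketch (the malecns_09 value is the hypothetical switch suggested above):

```r
# Derive the ID column and bucket directory from the dataset string
dataset      <- "malecns_09"                  # switched from "banc_746"
dataset_id   <- paste0(dataset, "_id")        # metadata ID column name
dataset_base <- sub("_[0-9]+$", "", dataset)  # directory name, as above
c(dataset_id, dataset_base)  # "malecns_09_id" "malecns"
```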
Note: Neurons missing from metadata are typically small fragments not assigned to full reconstructions.
Consider:
- BANC = female fly, FlyWire reconstruction methods
- maleCNS = male fly, Janelia reconstruction methods
- Sex differences: male flies may have different neuron numbers in some circuits
- Reconstruction quality: different methods may capture different neuron populations
- Annotation standards: despite harmonisation, some labelling differences remain
In this tutorial you learned how to:
✓ Access connectome data from Google Cloud Storage or local paths
✓ Use server-side filtering with Parquet for efficient GCS queries
✓ Apply Arrow’s lazy evaluation for local Parquet files
✓ Load small metadata files (.feather) into memory
✓ Explore neuron metadata and hierarchical classifications
✓ Filter synapse data by brain region using regex patterns
✓ Identify and characterise neurons by connectivity patterns
✓ Create publication-quality visualisations
✓ Compare datasets to identify biological vs technical variation
Next tutorial: 02_neuronal_morphology.Rmd - Load and visualise 3D neuron skeletons
sessionInfo()
## R version 4.2.1 (2022-06-23)
## Platform: x86_64-apple-darwin17.0 (64-bit)
## Running under: macOS Big Sur ... 10.16
##
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRlapack.dylib
##
## locale:
## [1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] reticulate_1.34.0 influencer_0.1.0 remotes_2.5.0
## [4] doSNOW_1.0.20 snow_0.4-4 iterators_1.0.14
## [7] foreach_1.5.2 readobj_0.4.1 dynamicTreeCut_1.63-1
## [10] lsa_0.73.3 SnowballC_0.7.0 tidygraph_1.2.3
## [13] ggraph_2.2.1 igraph_1.5.1 htmlwidgets_1.6.4
## [16] uwot_0.1.14 Matrix_1.6-1.1 umap_0.2.10.0
## [19] pheatmap_1.0.12 heatmaply_1.4.0 viridis_0.6.5
## [22] viridisLite_0.4.2 ggdendro_0.1.23 duckdb_0.9.2-1
## [25] DBI_1.2.3 plotly_4.11.0 nat.flybrains_1.8.2
## [28] nat.templatebrains_1.2.1 nat.nblast_1.6.7 nat_1.11.0
## [31] rgl_1.2.8 patchwork_1.1.3 forcats_0.5.2
## [34] stringr_1.6.0 dplyr_1.1.4 purrr_1.1.0
## [37] readr_2.1.5 tidyr_1.3.1 tibble_3.3.0
## [40] ggplot2_4.0.0.9000 tidyverse_1.3.2 arrow_16.1.0
##
## loaded via a namespace (and not attached):
## [1] readxl_1.4.1 backports_1.5.0 spam_2.10-0
## [4] systemfonts_1.2.3 plyr_1.8.9 lazyeval_0.2.2
## [7] crosstalk_1.2.0 digest_0.6.37 ca_0.71.1
## [10] htmltools_0.5.8.1 magrittr_2.0.4 memoise_2.0.1
## [13] googlesheets4_1.1.1 tzdb_0.4.0 graphlayouts_1.1.1
## [16] modelr_0.1.11 extrafont_0.18 extrafontdb_1.0
## [19] askpass_1.2.1 blob_1.2.4 rvest_1.0.3
## [22] rappdirs_0.3.3 ggrepel_0.9.5 textshaping_0.3.6
## [25] haven_2.5.1 xfun_0.54 crayon_1.5.3
## [28] jsonlite_2.0.0 glue_1.8.0 polyclip_1.10-4
## [31] registry_0.5-1 gtable_0.3.6 gargle_1.6.0
## [34] webshot_0.5.5 Rttf2pt1_1.3.11 scales_1.4.0
## [37] Rcpp_1.0.11 bit_4.6.0 dotCall64_1.1-1
## [40] httr_1.4.7 RColorBrewer_1.1-3 ellipsis_0.3.2
## [43] nabor_0.5.0 pkgconfig_2.0.3 farver_2.1.2
## [46] sass_0.4.8 dbplyr_2.2.1 utf8_1.2.6
## [49] labeling_0.4.3 tidyselect_1.2.1 rlang_1.1.6
## [52] cellranger_1.1.0 tools_4.2.1 cachem_1.1.0
## [55] cli_3.6.5 generics_0.1.4 RSQLite_2.3.4
## [58] broom_1.0.6 evaluate_1.0.5 fastmap_1.2.0
## [61] ragg_1.2.4 yaml_2.3.10 knitr_1.50
## [64] bit64_4.6.0-1 fs_1.6.3 filehash_2.4-6
## [67] dendroextras_0.2.3 nat.utils_0.6.1 dendextend_1.17.1
## [70] xml2_1.3.6 compiler_4.2.1 rstudioapi_0.17.1
## [73] png_0.1-8 reprex_2.0.2 tweenr_2.0.2
## [76] bslib_0.6.1 stringi_1.8.3 RSpectra_0.16-1
## [79] lattice_0.20-45 vctrs_0.6.5 pillar_1.11.1
## [82] lifecycle_1.0.4 jquerylib_0.1.4 data.table_1.16.2
## [85] seriation_1.4.0 R6_2.6.1 TSP_1.2-1
## [88] gridExtra_2.3 codetools_0.2-18 dichromat_2.0-0.1
## [91] MASS_7.3-58.1 assertthat_0.2.1 openssl_2.3.4
## [94] withr_3.0.2 parallel_4.2.1 hms_1.1.3
## [97] grid_4.2.1 rmarkdown_2.30 S7_0.2.0
## [100] googledrive_2.1.1 ggforce_0.4.1 lubridate_1.8.0
## [103] base64enc_0.1-3